04. Inference on the Jetson

You’ve trained a deep neural network and performed basic inference for visualization and testing on the host system with DIGITS. Now let’s see inference in action on the Jetson TX2. The following instructions are a subset of those in NVIDIA’s Two Days to a Demo tutorial.

Cloning the Repo

To obtain the repository, navigate to a folder of your choosing on the Jetson. First, make sure git and cmake are installed locally:

$ sudo apt-get install git cmake

Then clone the jetson-inference repo:

$ git clone https://github.com/dusty-nv/jetson-inference

Configuring with cmake

When cmake is run, a special pre-installation script (CMakePreBuild.sh) runs and automatically installs any required dependencies.

$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../

Note: the cmake command launches the CMakePreBuild.sh script, which asks for sudo while making sure the prerequisite packages are installed on the Jetson. The script also downloads the network model snapshots from web services.

Compiling the Project

Make sure you are still in the jetson-inference/build directory created in the previous step.

$ cd jetson-inference/build            # omit if you are already in this directory
$ make
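
If the build succeeds, the compiled sample binaries land under aarch64/bin inside the build directory (assuming the default build layout described below); a quick listing confirms they are there:

$ ls aarch64/bin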

Classifying Images with ImageNet

The imageNet object accepts an input image and outputs the probability for each class. As examples of using imageNet, the repo provides a command-line interface called imagenet-console and a live camera program called imagenet-camera.
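
For reference, here is a rough sketch of how the imageNet class can be called from your own C++ program. This is a hypothetical example (my-classify.cpp is not part of the repo) written against the TX2-era jetson-inference API; function names such as loadImageRGBA() and the exact Classify() signature have changed in later versions, so check the headers in your checkout.

// my-classify.cpp -- hypothetical minimal example, not part of the repo
#include "imageNet.h"
#include "loadImage.h"

#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        printf("usage: my-classify <image-path>\n");
        return 0;
    }

    // create the recognition network (GoogLeNet here; imageNet::ALEXNET also works)
    imageNet* net = imageNet::Create(imageNet::GOOGLENET);

    if (!net)
        return -1;

    // load the image from disk into shared CPU/GPU memory as float4 RGBA
    float* imgCPU  = NULL;
    float* imgCUDA = NULL;
    int width = 0, height = 0;

    if (!loadImageRGBA(argv[1], (float4**)&imgCPU, (float4**)&imgCUDA, &width, &height))
        return -1;

    // classify the image; returns the index of the most likely class
    float confidence = 0.0f;
    const int classIndex = net->Classify(imgCUDA, width, height, &confidence);

    if (classIndex >= 0)
        printf("classified as '%s' (class %i) with %.2f%% confidence\n",
               net->GetClassDesc(classIndex), classIndex, confidence * 100.0f);

    delete net;
    return 0;
}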

Using the Console Program on Jetson

First, try using the imagenet-console program to test imageNet recognition on some example images. It loads an image, uses TensorRT and the imageNet class to perform the inference, then overlays the classification and saves the output image.
After building, make sure your terminal is located in the aarch64/bin directory:

$ cd jetson-inference/build/aarch64/bin

Then, classify an example image with the imagenet-console program. imagenet-console accepts two command-line arguments: the path to the input image and the path to the output image (with the class overlay printed on it).

$ ./imagenet-console orange_0.jpg output_0.jpg

$ ./imagenet-console granny_smith_1.jpg output_1.jpg

Next, we will use imageNet to classify a live video feed from the Jetson onboard camera.
The real-time image recognition demo is also located in aarch64/bin and is called imagenet-camera. It runs on the live camera stream and, depending on the command-line argument, loads GoogLeNet or AlexNet with TensorRT. Choose one of the following:

$ ./imagenet-camera googlenet           # to run using googlenet
$ ./imagenet-camera alexnet             # to run using alexnet

The frames per second (FPS), the classified object name, and the confidence of the classification are printed to the OpenGL window title bar. By default the application can recognize up to 1000 different types of objects, since GoogLeNet and AlexNet were trained on the ILSVRC12 ImageNet database, which contains 1000 classes of objects. The mapping of class indices to object names is included in the repo under data/networks/ilsvrc12_synset_words.txt.
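
Each line of that file pairs a WordNet synset ID with one or more human-readable labels, and the class index is simply the line number (starting from 0). To peek at the first few entries from the repository root:

$ head -n 3 data/networks/ilsvrc12_synset_words.txt

The first entry, for instance, reads "n01440764 tench, Tinca tinca".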

Note: by default, the Jetson's onboard CSI camera is used as the video source. If you wish to use a USB webcam instead, change the DEFAULT_CAMERA define at the top of imagenet-camera.cpp to reflect the /dev/video V4L2 device of your USB camera. The application has been tested with a Logitech C920.
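
In the TX2-era sources, the define near the top of imagenet-camera.cpp looks roughly like this (the exact comment may differ in your checkout):

#define DEFAULT_CAMERA -1   // -1 = onboard CSI camera; >= 0 selects that /dev/video V4L2 device

Changing the value to 0 selects /dev/video0, which is typically where a USB webcam such as the C920 enumerates; rebuild with make afterwards for the change to take effect.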

Live camera inference demo

Inference Engine On TX2